
Covariance structure


Covariance-Based Structural Equation Modeling in Small-Sample Settings with $p>n$

Hasegawa, Hiroki, Tamura, Aoba, Okada, Yukihiko

arXiv.org Machine Learning

Factor-based Structural Equation Modeling (SEM) relies on likelihood-based estimation assuming a nonsingular sample covariance matrix, which breaks down in small-sample settings with $p>n$. To address this, we propose a novel estimation principle that reformulates the covariance structure into self-covariance and cross-covariance components. The resulting framework defines a likelihood-based feasible set combined with a relative error constraint, enabling stable estimation of the sign and direction of structural parameters in small-sample settings where $p>n$. Experiments on synthetic and real-world data show improved stability, particularly in recovering the sign and direction of structural parameters. These results extend covariance-based SEM to small-sample settings and provide practically useful directional information for decision-making.
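The breakdown the abstract describes can be seen directly: when the number of variables $p$ exceeds the sample size $n$, the sample covariance matrix is rank-deficient, so any likelihood requiring its inverse is undefined. A minimal numerical sketch (illustrative only, not the authors' estimator):

```python
import numpy as np

# With p > n, the p x p sample covariance matrix has rank at most n - 1
# (one degree of freedom is lost to mean centering), so it is singular and
# the standard likelihood-based SEM objective, which needs its inverse,
# cannot be evaluated.
rng = np.random.default_rng(0)
n, p = 10, 30                       # fewer observations than variables
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)         # p x p sample covariance
rank = np.linalg.matrix_rank(S)
print(rank, p)                      # rank is far below p
```

Any reformulation that avoids inverting `S` directly, as the proposed self-/cross-covariance decomposition does, sidesteps this singularity.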


Identification and Inference in Nonlinear Dynamic Network Models

Vallarino, Diego

arXiv.org Machine Learning

We study identification and inference in nonlinear dynamic systems defined on unknown interaction networks. The system evolves through an unobserved dependence matrix governing cross-sectional shock propagation via a nonlinear operator. We show that the network structure is not generically identified, and that identification requires sufficient spectral heterogeneity. In particular, identification arises when the network induces non-exchangeable covariance patterns through heterogeneous amplification of eigenmodes. When the spectrum is concentrated, dependence becomes observationally equivalent to common shocks or scalar heterogeneity, leading to non-identification. We provide necessary and sufficient conditions for identification, characterize observational equivalence classes, and propose a semiparametric estimator with asymptotic theory. We also develop tests for network dependence whose power depends on spectral properties of the interaction matrix. The results apply to a broad class of economic models, including production networks, contagion models, and dynamic interaction systems.
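The non-identification result under a concentrated spectrum can be illustrated in the simplest linear case (an assumed toy setup, not the paper's model): if $x = A\varepsilon$ with $\mathrm{Cov}(\varepsilon) = I$, the observable covariance is $AA^\top$, and a network matrix with a fully concentrated spectrum, $A = \lambda I$, is indistinguishable from scaled i.i.d. shocks.

```python
import numpy as np

# For x = A @ eps with Cov(eps) = I, the observable covariance is A @ A.T.
# A "network" with a fully concentrated spectrum, A = lam * I, yields
# Cov(x) = lam**2 * I -- exactly the covariance of no network with shock
# variance scaled by lam**2, so second moments cannot identify the network.
lam, p = 2.0, 4
A_network = lam * np.eye(p)                  # concentrated spectrum
cov_network = A_network @ A_network.T
cov_scalar = (lam**2) * np.eye(p)            # scalar heterogeneity, no network
equivalent = bool(np.allclose(cov_network, cov_scalar))
print(equivalent)                            # observationally equivalent
```

Identification therefore requires heterogeneous eigenvalue amplification, which produces non-exchangeable covariance patterns that no scalar rescaling can reproduce.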


A Bayesian method for reducing bias in neural representational similarity analysis

Neural Information Processing Systems

In neuroscience, the similarity matrix of neural activity patterns in response to different sensory stimuli or under different cognitive states reflects the structure of neural representational space. Existing methods derive point estimations of neural activity patterns from noisy neural imaging data, and the similarity is calculated from these point estimations. We show that this approach translates structured noise from estimated patterns into spurious bias structure in the resulting similarity matrix, which is especially severe when signal-to-noise ratio is low and experimental conditions cannot be fully randomized in a cognitive task. We propose an alternative Bayesian framework for computing representational similarity in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data, and directly estimate this covariance structure from imaging data while marginalizing over the unknown activity patterns. Converting the estimated covariance structure into a correlation matrix offers a much less biased estimate of neural representational similarity. Our method can also simultaneously estimate a signal-to-noise map that informs where the learned representational structure is supported more strongly, and the learned covariance matrix can be used as a structured prior to constrain Bayesian estimation of neural activity patterns.
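The bias from correlating point estimates can be demonstrated with a toy simulation (an assumed setup, not the paper's generative model): two conditions that share the same true activity pattern have true similarity 1, yet independent estimation noise attenuates the correlation of the point estimates, and structured (shared) noise would distort it further.

```python
import numpy as np

# Two conditions share the SAME true pattern (true similarity = 1).
# Correlating noisy point estimates systematically underestimates it.
rng = np.random.default_rng(1)
n_voxels, noise_sd, n_sims = 200, 1.0, 500
true_pattern = rng.normal(size=n_voxels)
corrs = []
for _ in range(n_sims):
    est_a = true_pattern + noise_sd * rng.normal(size=n_voxels)
    est_b = true_pattern + noise_sd * rng.normal(size=n_voxels)
    corrs.append(np.corrcoef(est_a, est_b)[0, 1])
mean_corr = float(np.mean(corrs))
print(round(mean_corr, 2))          # well below the true similarity of 1.0
```

With equal signal and noise variance the expected correlation is roughly 0.5, not 1. Marginalizing over the unknown patterns, as the Bayesian framework does, avoids plugging these noisy estimates into the similarity computation.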





45c166d697d65080d54501403b433256-AuthorFeedback.pdf

Neural Information Processing Systems

The reviewers acknowledge that the ideas presented in the paper are compelling, sound, and appear to be effective (R3), offering a great addition to the GP literature (R1), which is also supported by a solid and interesting theoretical foundation (R2, R4). Existing multi-output GP models are not applicable to our setting (see lines 79-83) and are thus not comparable to the DAG-GP model. We have further clarified this point in Section 1.2.



0d5a4a5a748611231b945d28436b8ece-Paper.pdf

Neural Information Processing Systems

Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that 'know what they do not know' by introducing inductive biases in the function space.


Continuous Mean-Covariance Bandits

Neural Information Processing Systems

Specifically, in CMCB, there is a learner who sequentially chooses weight vectors on given options and observes random feedback according to the decisions. The agent's objective is to achieve the best trade-off between reward and risk, measured with option covariance. To capture different reward observation scenarios in practice, we consider three feedback settings, i.e., full-information, semi-bandit, and full-bandit feedback. We propose novel algorithms with optimal regrets (within logarithmic factors), and provide matching lower bounds to validate their optimality. The experimental results also demonstrate the superiority of our algorithms.
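The reward-risk trade-off the abstract describes can be sketched as a mean-variance objective (the names `mu`, `Sigma`, and the risk weight `rho` below are illustrative, not the paper's notation): for a weight vector over options, the learner balances expected reward against covariance-based risk.

```python
import numpy as np

# Mean rewards and covariance of three hypothetical options.
mu = np.array([1.0, 0.8, 0.5])
Sigma = np.array([[0.20, 0.05, 0.00],
                  [0.05, 0.10, 0.02],
                  [0.00, 0.02, 0.05]])
rho = 1.0                                   # risk-aversion weight (assumed)

def objective(w):
    # expected reward minus rho times covariance-based risk
    return float(mu @ w - rho * w @ Sigma @ w)

w_risky = np.array([1.0, 0.0, 0.0])         # all weight on the highest mean
w_mixed = np.array([0.4, 0.3, 0.3])         # diversified weights
print(objective(w_risky), objective(w_mixed))
```

In the bandit setting `mu` and `Sigma` are unknown and must be learned from the chosen feedback (full-information, semi-bandit, or full-bandit), which is what drives the differing regret guarantees.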